Derivative free optimization method

Authors

  • Tamás Terlaky
  • Katya Scheinberg
Abstract

Derivative-free optimization (DFO) methods are typically designed to solve optimization problems whose objective function is computed by a “black box”, so that gradient information is unavailable. Each call to the “black box” is often expensive, so estimating derivatives by finite differences may be prohibitively costly. Finally, the objective function value may be computed with some noise, in which case finite-difference estimates may be inaccurate. All of the above properties, namely relatively expensive “black box” evaluations and the presence of noise, are characteristic of the Cycle-Tempo optimization problems. However, it is the noise that creates most of the difficulty in applying gradient-based methods to these problems. The derivative-free optimization method which we use approximates the objective function explicitly without approximating its derivatives. The theoretical analysis presented below assumes that no noise is present; however, extensive experiments and intuition support the claim that the robustness of DFO does not suffer from the presence of a moderate level of noise. Various other methods have been developed recently to handle similar classes of problems (see [11], [12], [8], [6], [1]). In [1] a modification of a quasi-Newton method that accommodates noise in the objective function is considered. However, the analysis in [1] assumes that the level of noise reduces to zero as the optimal solution is approached, an assumption that is not realistic in the case of Cycle-Tempo. The DFO method that we use here compares favorably with many other existing derivative-free methods, both in speed and in accuracy. It is described in [2] and [3] and is essentially a combination of a trust-region framework with quadratic interpolation of the objective function. Using polynomial interpolation models within trust-region frameworks has been proposed earlier in [8], [6] and [5]. The main distinction of the method in [2] and [3] from these earlier methods lies in the approach to handling the geometry of the interpolation set. In Subsection 3 we briefly describe the essence of this approach; for a more detailed analysis the reader is referred to [2]. First, we describe the trust-region framework.
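To make the preceding description concrete, the following Python sketch illustrates one derivative-free trust-region iteration built on a quadratic interpolation model: sample points near the current iterate, fit a quadratic by least squares, take a model steepest-descent (Cauchy-type) step inside the trust region, and accept or reject it by comparing actual and predicted reduction. It is a minimal illustration under simplifying assumptions, not the algorithm of [2] and [3]; in particular, the test function, the resampling of the interpolation set at every iteration, and all parameter values are hypothetical, and the careful management of the interpolation-set geometry that distinguishes the method of [2] and [3] is omitted.

# Minimal sketch of a derivative-free trust-region step with a quadratic
# interpolation model. Not the method of [2], [3]; all parameters are illustrative.
import numpy as np

def quadratic_basis(x):
    """Monomial basis 1, x_i, x_i*x_j (i <= j) evaluated at the point x."""
    n = len(x)
    terms = [1.0] + list(x)
    for i in range(n):
        for j in range(i, n):
            terms.append(x[i] * x[j])
    return np.array(terms)

def fit_quadratic_model(points, values):
    """Least-squares fit of a quadratic model to sampled function values."""
    A = np.array([quadratic_basis(p) for p in points])
    coef, *_ = np.linalg.lstsq(A, np.array(values), rcond=None)
    return coef

def model_value(coef, x):
    return float(coef @ quadratic_basis(x))

def model_gradient(coef, x, h=1e-6):
    """Gradient of the cheap model by central differences; the expensive
    black-box objective itself is never differentiated."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (model_value(coef, x + e) - model_value(coef, x - e)) / (2 * h)
    return g

def dfo_trust_region_step(f, xk, points, values, delta, eta=0.1):
    """One trust-region iteration on the interpolation model."""
    coef = fit_quadratic_model(points, values)
    g = model_gradient(coef, xk)
    step = -delta * g / (np.linalg.norm(g) + 1e-12)   # Cauchy-type step
    pred = model_value(coef, xk) - model_value(coef, xk + step)
    actual = f(xk) - f(xk + step)
    rho = actual / pred if pred > 0 else -1.0
    if rho >= eta:                    # good agreement: accept step, expand radius
        return xk + step, min(2 * delta, 1.0)
    return xk, 0.5 * delta            # poor agreement: reject step, shrink radius

if __name__ == "__main__":
    # Hypothetical test problem: the two-dimensional Rosenbrock function.
    rosenbrock = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    rng = np.random.default_rng(0)
    xk, delta = np.array([-1.2, 1.0]), 0.5
    for _ in range(50):
        # For simplicity the interpolation set is resampled each iteration;
        # practical DFO methods update it and control its geometry instead.
        pts = [xk + delta * rng.standard_normal(2) for _ in range(6)] + [xk]
        vals = [rosenbrock(p) for p in pts]
        xk, delta = dfo_trust_region_step(rosenbrock, xk, pts, vals, delta)
    print("best point found:", xk, "f =", rosenbrock(xk))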


Similar articles

A Three-terms Conjugate Gradient Algorithm for Solving Large-Scale Systems of Nonlinear Equations

The nonlinear conjugate gradient method is well known for solving large-scale unconstrained optimization problems due to its low storage requirement and simplicity of implementation. Research activities on its application to higher-dimensional systems of nonlinear equations are just beginning. This paper presents a three-term conjugate gradient algorithm for solving large-scale systems of nonlinear e...


Numerical experience with a derivative-free trust-funnel method for nonlinear optimization problems with general nonlinear constraints

A trust-funnel method is proposed for solving nonlinear optimization problems with general nonlinear constraints. It extends the one presented by Gould and Toint (Math. Prog., 122(1):155–196, 2010), originally proposed for equality-constrained optimization problems only, to problems with both equality and inequality constraints and where simple bounds are also considered. Like the original one, ou...


An Active Appearance Model with a Derivative-Free Optimization

A new AAM matching algorithm based on derivative-free optimization is presented and evaluated in this paper. For this, we use an efficient model-based optimization algorithm, the so-called NEWUOA algorithm of M.J.D. Powell. We compare the performance of the new matching method against the standard one based on a fixed Jacobian matrix learned from a training set, and show significant improvements ...


Derivative-Free Optimization Via Proximal Point Methods

Derivative-Free Optimization (DFO) examines the challenge of minimizing (or maximizing) a function without explicit use of derivative information. Many standard techniques in DFO are based on using model functions to approximate the objective function, and then applying classic optimization methods on the model function. For example, the details behind adapting steepest descent, conjugate gradi...


Augmented Downhill Simplex a Modified Heuristic Optimization Method

The Augmented Downhill Simplex Method (ADSM), a heuristic combination of the Downhill Simplex Method (DSM) with a random search algorithm, is introduced here. DSM is an interpretable nonlinear local optimization method; however, as a local exploitation algorithm it can become trapped in a local minimum. In contrast, random search is a global exploration method, but is less efficient. Here, rand...


Randomized Similar Triangles Method: A Unifying Framework for Accelerated Randomized Optimization Methods (Coordinate Descent, Directional Search, Derivative-Free Method)

In this paper, we consider smooth convex optimization problems with simple constraints and inexactness in the oracle information, such as the value, partial, or directional derivatives of the objective function. We introduce a unifying framework which allows one to construct different types of accelerated randomized methods for such problems and to prove convergence rate theorems for them. We focus on a...



Journal:

Volume   Issue

Pages  -

Publication date: 2000